Session G-3

5G and mmW Networks

Conference: 10:00 AM — 11:30 AM EDT
Local: May 4 Wed, 10:00 AM — 11:30 AM EDT

A Comparative Measurement Study of Commercial 5G mmWave Deployments

Arvind Narayanan (University of Minnesota, USA); Muhammad Iqbal Rochman (University of Chicago, USA); Ahmad Hassan (University of Minnesota, USA); Bariq S. Firmansyah (Institut Teknologi Bandung, Indonesia); Vanlin Sathya (University of Chicago, USA); Monisha Ghosh (University Of Chicago, USA); Feng Qian (University of Minnesota, Twin Cities, USA); Zhi-Li Zhang (University of Minnesota, USA)

5G NR is beginning to be widely deployed in the mmWave frequencies in urban areas in the US and around the world. Due to the directional nature of mmWave propagation, beam management and deployment configurations are crucial for improving performance. We perform detailed measurements of mmWave 5G NR deployments by two major US operators (OpX and OpY) in two diverse environments: an open field with a baseball park (BP) and a downtown urban canyon region (DT), using smartphone-based tools that collect detailed measurements across several layers (PHY, MAC and up), including beam-specific metrics such as signal strength, beam switch times, and throughput per beam. Our measurement analysis shows that the parameters of the two deployments differ in a number of aspects: the number of beams used, the number of channels aggregated, and the density of deployments, which are reflected in the throughput performance. Our measurement-driven propagation analysis demonstrates that narrower beams experience a lower path-loss exponent than wider beams, which, combined with up to eight frequency channels aggregated on up to eight beams, can deliver a peak throughput of 1.2 Gbps at distances greater than 100 m.
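To make the propagation claim concrete, the sketch below applies the standard log-distance path-loss model together with a simple channel-aggregation estimate. The reference loss, exponents, and per-channel rate are illustrative placeholders, not values measured in the paper.

```python
import math

def path_loss_db(d_m, pl_ref_db=61.4, d_ref_m=1.0, exponent=2.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0).
    pl_ref_db and the exponents below are placeholder values; the paper's
    finding is only that narrower beams show a lower exponent n."""
    return pl_ref_db + 10.0 * exponent * math.log10(d_m / d_ref_m)

# narrower beam (lower exponent) vs. wider beam (higher exponent) at 100 m
for label, n in [("narrow beam", 2.0), ("wide beam", 2.8)]:
    print(f"{label}: {path_loss_db(100.0, exponent=n):.1f} dB")

# throughput grows with the number of aggregated channels; the per-channel
# rate here is a hypothetical figure, not a measured one
per_channel_gbps = 0.15
print(f"8 aggregated channels ~ {8 * per_channel_gbps:.1f} Gbps peak")
```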

AI in 5G: The Case of Online Distributed Transfer Learning over Edge Networks

Yulan Yuan (Beijing University of Posts and Telecommunications, China); Lei Jiao (University of Oregon, USA); Konglin Zhu (Beijing University of Posts and Telecommunications, China); Xiaojun Lin (Purdue University, USA); Lin Zhang (Beijing University of Posts and Telecommunications, China)

Transfer learning does not train a model from scratch but leverages existing models to help train a new model with better accuracy. Unfortunately, realizing transfer learning in distributed cloud-edge networks faces critical challenges such as online training, uncertain network environments, time-coupled control decisions, and the balance between resource consumption and model accuracy. We formulate distributed transfer learning as a non-linear mixed-integer program of long-term cost optimization. We design polynomial-time online algorithms by exploiting the real-time trade-off between preserving previous decisions and applying new decisions, based on primal-dual one-shot solutions for each single time slot. While orchestrating model placement, data dispatching, and inference aggregation, our approach produces new models by combining the existing offline models with the online models being trained, using weights adaptively updated based on inference over data samples that arrive dynamically. Our approach provably incurs a number of inference mistakes no greater than a constant times that of the single best model in hindsight, and achieves a constant competitive ratio for the total cost. Evaluations on real-world traces have confirmed the superior performance of our approach compared to alternatives.
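The mistake guarantee relative to the best model in hindsight is the hallmark of multiplicative-weights (Hedge-style) expert combination. The sketch below shows that generic update as a stand-in; the learning rate, binary labels, and voting rule are assumptions for illustration, not the paper's exact weighting scheme.

```python
import numpy as np

def hedge_combine(models, stream, eta=0.5):
    """Multiplicative-weights (Hedge-style) combination of expert models:
    weights shrink for models that err, so cumulative mistakes stay within a
    constant factor of the single best model in hindsight. A generic sketch,
    not the paper's exact weighting or aggregation rule."""
    w = np.ones(len(models))
    mistakes = 0
    for x, y in stream:                                # samples arrive online
        preds = np.array([m(x) for m in models])
        yhat = int(np.dot(w, preds) >= 0.5 * w.sum())  # weighted vote on {0,1}
        mistakes += int(yhat != y)
        w *= np.exp(-eta * (preds != y))               # penalize wrong models
    return mistakes

# toy usage: two fixed "offline" experts plus one "online" expert
experts = [lambda x: 1, lambda x: 0, lambda x: int(x > 0)]
stream = [(v, int(v > 0)) for v in (-2, 3, -1, 5, 4, -3)]
print(hedge_combine(experts, stream))
```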

mmPhone: Acoustic Eavesdropping on Loudspeakers via mmWave-characterized Piezoelectric Effect

Chao Wang, Feng Lin, Tiantian Liu, Ziwei Liu, Yijie Shen, Zhongjie Ba and Li Lu (Zhejiang University, China); Wenyao Xu (SUNY Buffalo & Wireless Health Institute, USA); Kui Ren (Zhejiang University, China)

More and more people turn to online voice communication with loudspeaker-equipped devices due to its convenience. To prevent speech leakage, soundproof rooms are often adopted. This paper presents mmPhone, a novel acoustic eavesdropping system that recovers loudspeaker speech protected by soundproof environments. The key idea is that the properties of piezoelectric films in the mmWave band change with sound pressure due to the piezoelectric effect. If these property changes are acquired by an adversary (i.e., by characterizing the piezoelectric effect with mmWaves), speech leakage can happen. More importantly, the piezoelectric film can work without a power supply. Based on this, we propose a methodology that uses mmWaves to sense the film and decodes the speech from the mmWave signal, which turns the film into a passive "microphone". To recover intelligible speech, we further develop an enhancement scheme based on a denoising neural network, multi-channel augmentation, and speech synthesis to compensate for the propagation and penetration loss of mmWaves. We perform extensive experiments to evaluate mmPhone and achieve digit recognition with over 93% accuracy. The results indicate that mmPhone can recover high-quality and intelligible speech from a distance of over 5 m and is resilient to incident angles of sound waves (within 55 degrees) and different types of loudspeakers.

Optimizing Coverage with Intelligent Surfaces for Indoor mmWave Networks

Jingyuan Zhang and Douglas Blough (Georgia Institute of Technology, USA)

Reconfigurable intelligent surfaces (RISs) have been proposed to increase coverage in millimeter-wave networks by providing an indirect path from transmitter to receiver when the line-of-sight (LoS) path is blocked. In this paper, the problem of optimizing the locations and orientations of multiple RISs is considered for the first time. An iterative coverage expansion algorithm based on gradient descent is proposed for indoor scenarios where obstacles are present. The goal of this algorithm is to maximize coverage within the shadowed regions where there is no LoS path to the access point. The algorithm is guaranteed to converge to a local coverage maximum and is combined with an intelligent initialization procedure to improve the performance and efficiency of the approach. Numerical results demonstrate that, in dense obstacle environments, the proposed algorithm doubles coverage compared to a solution without RISs and provides about a 10% coverage increase compared to a brute-force sequential RIS placement approach.
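A minimal sketch of the gradient idea, assuming a single RIS, a smooth coverage surrogate, and a finite-difference gradient; the paper's algorithm additionally optimizes orientations, handles multiple RISs and obstacles, and includes the intelligent initialization.

```python
import numpy as np

def soft_coverage(ris_xy, ap_xy, users_xy, budget=4.0, k=2.0):
    """Smooth coverage surrogate: fraction of shadowed users whose
    AP->RIS->user path length stays under a budget; a sigmoid replaces the
    hard indicator so that a gradient exists. All constants are illustrative."""
    path = np.linalg.norm(ris_xy - ap_xy) + np.linalg.norm(users_xy - ris_xy, axis=1)
    return np.mean(1.0 / (1.0 + np.exp(k * (path - budget))))

def place_ris(ap_xy, users_xy, init_xy, lr=0.05, steps=200, eps=1e-3):
    """Iterative coverage expansion by finite-difference gradient ascent on a
    single RIS position -- a sketch of the idea only."""
    xy = init_xy.astype(float)
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):
            d = np.zeros(2); d[i] = eps
            grad[i] = (soft_coverage(xy + d, ap_xy, users_xy)
                       - soft_coverage(xy - d, ap_xy, users_xy)) / (2 * eps)
        xy += lr * grad            # ascend toward a local coverage maximum
    return xy

ap = np.array([0.0, 0.0])
shadowed_users = np.array([[2.0, 1.0], [2.0, -1.0], [3.0, 0.0]])
print(place_ris(ap, shadowed_users, init_xy=np.array([1.0, 2.0])))
```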

Session Chair

Xiaojun Lin (Purdue University)

Session G-4

Algorithms 1

Conference: 12:00 PM — 1:30 PM EDT
Local: May 4 Wed, 12:00 PM — 1:30 PM EDT

Copa+: Analysis and Improvement of the Delay-based Congestion Control Algorithm Copa

Wanchun Jiang, Haoyang Li, Zheyuan Liu, Jia Wu and Jiawei Huang (Central South University, China); Danfeng Shan (Xi'an Jiaotong University, China); Jianxin Wang (Central South University, China)

Copa is a delay-based congestion control algorithm recently proposed at NSDI. It achieves consistently high performance under various network environments and has already been deployed at Facebook. In this paper, we theoretically analyze Copa and reveal its large queuing delay and poor fairness issues under certain conditions. The root cause is that Copa fails to clear the bottleneck buffer occupancy periodically as expected. As a result, Copa may obtain a wrong base RTT estimate and enter its competitive mode by mistake, leading to large delay and unfairness. To address these issues, we propose Copa+, which enhances Copa with a parameter adaptation mechanism and an optimized competitive-mode entrance criterion. Designed based on our theoretical analysis, Copa+ adaptively clears the bottleneck buffer occupancy to correctly estimate the base RTT. Consequently, Copa+ inherits the advantages of Copa but achieves lower queuing delay and better fairness under different environments, as confirmed by real-world experiments and simulations. Specifically, Copa+ achieves throughput similar to Copa but 11.9% lower queuing delay over Internet links among different cloud nodes, and achieves 39.4% lower queuing delay and 8.9% higher throughput compared to Sprout over emulated cellular links.
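Copa's delay-based targeting, which the analysis above builds on, sets its rate from the gap between a standing RTT and a base (minimum) RTT. The sketch below illustrates only that estimation step; the delta value, window length, and simplified rate formula are assumptions, and Copa's cwnd/velocity dynamics and Copa+'s parameter adaptation are omitted.

```python
from collections import deque

class CopaLikeEstimator:
    """Sketch of Copa's delay-based targeting: queuing delay is the gap
    between a standing (recent-minimum) RTT and the base (long-run minimum)
    RTT, and the target rate is 1 / (delta * queuing_delay)."""
    def __init__(self, delta=0.5, standing_window=2):
        self.delta = delta
        self.base_rtt = float("inf")
        self.recent = deque(maxlen=standing_window)

    def on_ack(self, rtt_sample):
        # if the bottleneck queue never fully drains, base_rtt stays inflated,
        # which is exactly the mis-estimation the paper analyzes
        self.base_rtt = min(self.base_rtt, rtt_sample)
        self.recent.append(rtt_sample)
        queuing_delay = max(min(self.recent) - self.base_rtt, 1e-6)
        return 1.0 / (self.delta * queuing_delay)   # target rate (pkts/s)

est = CopaLikeEstimator()
for rtt in (0.050, 0.048, 0.060, 0.055):
    target = est.on_ack(rtt)
print(f"target rate ~ {target:.0f} packets/s")
```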

Learning for Robust Combinatorial Optimization: Algorithm and Application

Zhihui Shao (UC Riverside, USA); Jianyi Yang (University of California, Riverside, USA); Cong Shen (University of Virginia, USA); Shaolei Ren (University of California, Riverside, USA)

Learning to optimize (L2O) has recently emerged as a promising approach to solving optimization problems by exploiting the strong prediction power of neural networks and offering lower runtime complexity than conventional solvers. While L2O has been applied to various problems, a crucial yet challenging class of problems - robust combinatorial optimization in the form of minimax optimization - has largely remained under-explored. In addition to the exponentially large decision space, a key challenge for robust combinatorial optimization lies in the inner optimization problem, which is typically non-convex and entangled with the outer optimization. In this paper, we study robust combinatorial optimization and propose a novel learning-based optimizer, called LRCO (Learning for Robust Combinatorial Optimization), which quickly outputs a robust solution in the presence of uncertain context. LRCO leverages a pair of learning-based optimizers - one for the minimizer and the other for the maximizer - that use their respective objective functions as losses and can be trained without the need for labeled training problem instances. To evaluate the performance of LRCO, we perform simulations for the task offloading problem in vehicular edge computing. Our results highlight that LRCO can greatly reduce the worst-case cost with low runtime complexity.
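The label-free min-max training idea can be sketched as two small networks trained against each other with the objective itself as the loss. Everything below (the toy cost function, dimensions, relaxed continuous decisions, and optimizer settings) is an illustrative assumption, not LRCO's actual architecture.

```python
import torch
import torch.nn as nn

def cost(x, z, ctx):
    # toy robust objective: decision x, adversarial uncertainty z, context ctx
    return ((x - ctx) ** 2).sum(dim=1) + (x * z).sum(dim=1)

d = 8
min_net = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d), nn.Sigmoid())
max_net = nn.Sequential(nn.Linear(2 * d, 32), nn.ReLU(), nn.Linear(32, d), nn.Tanh())
opt_min = torch.optim.Adam(min_net.parameters(), lr=1e-3)
opt_max = torch.optim.Adam(max_net.parameters(), lr=1e-3)

for step in range(500):
    ctx = torch.rand(64, d)                  # sampled problem instances (no labels)
    # maximizer step: find bad uncertainty for the current (frozen) decisions
    x = min_net(ctx).detach()
    z = max_net(torch.cat([ctx, x], dim=1))
    opt_max.zero_grad()
    (-cost(x, z, ctx).mean()).backward()     # maximize cost
    opt_max.step()
    # minimizer step: decisions that minimize the approximate worst-case cost
    x = min_net(ctx)
    z = max_net(torch.cat([ctx, x], dim=1)).detach()
    opt_min.zero_grad()
    cost(x, z, ctx).mean().backward()        # minimize cost
    opt_min.step()
```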

Polynomial-Time Algorithm for the Regional SRLG-disjoint Paths Problem

Balázs Vass (Budapest University of Technology and Economics, Hungary); Erika R. Bérczi-Kovács and Ábel Barabás (Eötvös University, Budapest, Hungary); Zsombor László Hajdú and János Tapolcai (Budapest University of Technology and Economics, Hungary)

The current best practice in survivable routing is to compute link- or node-disjoint paths in the network topology graph. This protects against single-point failures; however, a single failure event may cause the interruption of multiple network elements. The set of network elements subject to a potential failure event is called a Shared Risk Link Group (SRLG), identified during network planning. Unfortunately, for an arbitrary list of SRLGs, finding two paths that can survive a single SRLG failure is NP-complete. In this paper, we provide a polynomial-time SRLG-disjoint routing algorithm for planar network topologies and a large set of SRLGs. Namely, we focus on regional failures, where the failed network elements must not be far from each other. We use a flexible definition of regional failure, where the only restriction is that the topology is a planar graph and the SRLGs form connected sets of edges in the dual of the planar graph. The proposed algorithm is based on a max-min theorem. Through extensive simulations, we show that the algorithm scales well with the network size, and one of the paths returned by the algorithm is only 4% longer than the shortest path on average.

Provably Efficient Algorithms for Traffic-sensitive SFC Placement and Flow Routing

Yingling Mao, Xiaojun Shang and Yuanyuan Yang (Stony Brook University, USA)

Network Function Virtualization (NFV) has the potential to offer cost-efficient, easy-to-manage, and flexible services, but it also poses challenges for the service function chain (SFC) deployment problem, which is NP-hard. The problem is so complicated that existing work conspicuously neglects the flow changes along the chains and only gives heuristic algorithms without a performance guarantee. In this paper, we fill this gap by formulating a traffic-sensitive online joint SFC placement and flow routing (TO-JPR) model and proposing a novel two-stage scheme to solve it. Moreover, we design a dynamic segmental packing (DSP) algorithm for the first stage, which not only maintains the minimal traffic burden for the network but also achieves an approximation ratio of 2 on the resource cost. This two-stage scheme and DSP pave the way for efficiently solving TO-JPR. For example, simply applying the nearest neighbor (NN) algorithm in the second stage guarantees a global approximation ratio of \(O(\log(M))\) on the network latency, where \(M\) is the number of servers. More future work can be done based on our scheme to achieve better performance on the network latency. Finally, we perform extensive simulations to demonstrate the outstanding performance of DSP+NN compared with the optimal solutions and benchmarks.
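For a feel of the second stage, the sketch below routes a flow through its chain by always hopping to the closest server hosting the next function. The Euclidean distances, server/function names, and capacity-free placement are assumptions for illustration; the paper's model measures network latency and builds on the first-stage DSP placement.

```python
import math

def nn_route(flow_src, chain, placement, pos):
    """Nearest-neighbor routing sketch: visit the service functions in SFC
    order, each time hopping to the closest server that hosts the next one.
    Names and distances are illustrative, not the paper's notation."""
    here, path, latency = flow_src, [flow_src], 0.0
    for fn in chain:                                   # functions in SFC order
        candidates = placement[fn]                     # servers hosting fn
        nxt = min(candidates, key=lambda s: math.dist(pos[here], pos[s]))
        latency += math.dist(pos[here], pos[nxt])
        path.append(nxt)
        here = nxt
    return path, latency

pos = {"src": (0, 0), "A": (1, 0), "B": (2, 1), "C": (0, 2)}
placement = {"fw": ["A", "C"], "nat": ["B"], "ids": ["C"]}
print(nn_route("src", ["fw", "nat", "ids"], placement, pos))
```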

Session Chair

En Wang (Jilin University)

Session G-5

Algorithms 2

Conference: 2:30 PM — 4:00 PM EDT
Local: May 4 Wed, 2:30 PM — 4:00 PM EDT

A Unified Model for Bi-objective Online Stochastic Bipartite Matching with Two-sided Limited Patience

Gaofei Xiao and Jiaqi Zheng (Nanjing University, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China)

Bi-objective online stochastic bipartite matching can capture a wide range of real-world problems such as online ride-hailing, crowdsourcing markets, and Internet advertising, where the vertices on the left side are known in advance and those on the right side arrive in an online manner from a known identical independent distribution (KIID). Mutual interest and limited attention span are two common conditions and can be modeled as edge existence probabilities and two-sided limited patience. Existing works fail to take them into account in bi-objective online optimization. This paper establishes a unified model for bi-objective online stochastic bipartite matching that provides a general tradeoff between the matched edges (OBJ-1) and vertices (OBJ-2). We formulate two linear programs (LPs) and accordingly design four LP-based parameterized online algorithms to trade off OBJ-1 and OBJ-2, with a best competitive ratio of \((0.3528\alpha, 0.3528\beta)\), where \(\alpha\) and \(\beta\) are two positive input parameters with \(\alpha + \beta = 1\). Our hardness analysis proves that no non-adaptive algorithm can be \((\delta_1, \delta_2)\)-competitive with \(\delta_1 + \delta_2 > 1 - \frac{1}{e}\). Trace-driven experiments show that our algorithms always achieve better performance and provide a flexible tradeoff.

Lazy Self-Adjusting Bounded-Degree Networks for the Matching Model

Evgeniy Feder (ITMO University, Russia); Ichha Rathod and Punit Shyamsukha (Indian Institute of Technology Delhi, India); Robert Sama (University of Vienna, Austria); Vitaly Aksenov (ITMO University, Russia); Iosif Salem and Stefan Schmid (University of Vienna, Austria)

Self-adjusting networks (SANs) utilize novel optical switching technologies to support dynamic physical network topology reconfiguration. SANs rely on online algorithms to exploit this topological flexibility to reduce the cost of serving network traffic, leveraging locality in the demand. While prior work has shown the potential of SANs, the theoretical guarantees rely on a simplified cost model in which traversing and adjusting a single link has uniform cost.

We initiate the study of online algorithms for SANs in a more realistic cost model, the Matching Model (MM), in which the network topology is given by the union of a constant number of bipartite matchings (realized by optical switches), and in which changing an entire matching incurs a fixed cost \(\alpha\). The cost of routing is given by the number of hops packets need to traverse.

Our main result is a lazy topology adjustment method for designing efficient online SAN algorithms in the MM. We design and analyze online SAN algorithms for line, tree, and bounded-degree networks in the MM, with cost \(O(\sqrt{\alpha})\) times the cost of reference algorithms in the uniform cost model (SM). We report on empirical results, based on publicly available datacenter network traces, that verify the theoretical bounds.
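The essence of laziness is rent-or-buy: keep absorbing extra routing cost on the current matchings and only pay the fixed reconfiguration cost once that extra cost has grown comparable to it. The sketch below shows that generic trigger; the single-counter bookkeeping and threshold choice are simplifying assumptions, not the paper's per-matching algorithm or its \(O(\sqrt{\alpha})\) analysis.

```python
class LazyReconfigurer:
    """Rent-or-buy-style laziness: keep serving requests on the current
    matchings while tracking the extra hop cost relative to an ideal topology,
    and only pay the fixed reconfiguration cost alpha once that extra cost has
    grown to match it. A simplified stand-in for the paper's lazy adjustment."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.extra = 0.0
        self.total = 0.0

    def serve(self, hops_now, hops_ideal, rebuild):
        self.total += hops_now
        self.extra += hops_now - hops_ideal
        if self.extra >= self.alpha:       # extra cost has paid for a rebuild
            rebuild()                      # swap in new matchings (costs alpha)
            self.total += self.alpha
            self.extra = 0.0

lazy = LazyReconfigurer(alpha=8)
for now, ideal in [(5, 1), (6, 1), (7, 2)]:
    lazy.serve(now, ideal, rebuild=lambda: print("reconfigure"))
print(f"total cost = {lazy.total}")
```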

Maximizing h-hop Independently Submodular Functions Under Connectivity Constraint

Wenzheng Xu and Dezhong Peng (Sichuan University, China); Weifa Liang and Xiaohua Jia (City University of Hong Kong, Hong Kong); Zichuan Xu (Dalian University of Technology, China); Pan Zhou (School of CSE, Huazhong University of Science and Technology, China); Weigang Wu and Xiang Chen (Sun Yat-sen University, China)

This study is motivated by the maximum connected coverage problem (MCCP), which is to deploy a connected UAV network with \(K\) given UAVs above a disaster area such that the number of users served by the UAVs is maximized. The deployed UAV network must be connected, since the data received by a UAV from its served users needs to be sent to the Internet through relays of other UAVs. Motivated by this application, in this paper we study a more general problem - the h-hop independently submodular maximization problem, of which the MCCP problem is a special case with \(h=4\). We propose a \((1-1/e)/(2h+3)\)-approximation algorithm for the h-hop independently submodular maximization problem, where \(e\) is the base of the natural logarithm. One direct result is a \((1-1/e)/11\)-approximate solution to the MCCP problem with \(h=4\), which significantly improves upon its previously best \((1-1/e)/32\)-approximate solution. We finally evaluate the performance of the proposed algorithm for the MCCP problem in the application of deploying UAV networks, and experimental results show that the number of users served by the UAVs deployed by the proposed algorithm is up to 12.5% larger than that by existing algorithms.
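For intuition about the coverage objective, the sketch below runs the classic greedy rule of repeatedly adding the site with the largest marginal coverage until K sites are placed. It deliberately ignores the connectivity requirement and h-hop structure that the paper's algorithm handles; the site names and user sets are toy assumptions.

```python
def greedy_coverage(sites, covers, K):
    """Greedy placement sketch: repeatedly add the candidate site that covers
    the most not-yet-served users. This is the classic (1 - 1/e) greedy for
    submodular coverage, without the paper's connectivity constraint."""
    chosen, served = [], set()
    for _ in range(min(K, len(sites))):
        best = max((s for s in sites if s not in chosen),
                   key=lambda s: len(covers[s] - served))
        chosen.append(best)
        served |= covers[best]
    return chosen, served

covers = {"p1": {1, 2, 3}, "p2": {3, 4}, "p3": {5, 6, 7, 8}}
print(greedy_coverage(list(covers), covers, K=2))
```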

Optimal Shielding to Guarantee Region-Based Connectivity under Geographical Failures

Binglin Tao, Mingyu Xiao, Bakhadyr Khoussainov and Junqiang Peng (University of Electronic Science and Technology of China, China)

As networks and their inter-connectivity grow and become more complex, failures in the networks impact society and industries more than ever. In these networks, the notion of connectedness is key to understanding and reasoning about these failures. Traditional studies on improving edge/node connectivity assume that failures occur at random. However, in many scenarios (such as earthquakes, hurricanes, and human-designed attacks on networks) failures are not random, and traditional methods do not always work. To address this limitation, we consider region-based connectivity to capture the local nature of failures under the geographical failure model, where failures may happen only on edges in a sub-network (region), and we want to shield some edges in the regions to protect connectivity. There may be several regions, and in different regions the failures occur independently. Firstly, we establish the NP-hardness of the problem for l regions, answering a question posed in previous papers. Secondly, we propose a polynomial-time algorithm for the special case of two regions based on matroid techniques. Furthermore, we design an ILP-based algorithm to solve the problem for l regions. Experimental results on random and real networks show that our algorithms are much faster than previously known algorithms.

Session Chair

Song Fang (University of Oklahoma)

Session G-6

Algorithms 3

Conference: 4:30 PM — 6:00 PM EDT
Local: May 4 Wed, 4:30 PM — 6:00 PM EDT

Ao\(^2\)I: Minimizing Age of Outdated Information to Improve Freshness in Data Collection

Qingyu Liu, Chengzhang Li, Thomas Hou, Wenjing Lou and Jeffrey Reed (Virginia Tech, USA); Sastry Kompella (Naval Research Laboratory, USA)

Recently, it has been recognized that the original Age of Information (AoI) metric has a serious limitation in quantifying the true freshness of information content. A new metric, called Age of Incorrect Information (AoII), has been proposed. By further refining this new metric with practical considerations, we introduce the Age of Outdated Information (Ao\(^2\)I) metric. In this paper, we investigate the scheduling problem of minimizing Ao\(^2\)I in an IoT data collection network. We derive a theoretical lower bound for the minimum Ao\(^2\)I that any scheduler can achieve. Then we present Heh, a low-complexity online scheduler. The design of Heh is based on estimating a novel offline scheduling priority metric in the absence of knowledge of the future. We prove that, at each time, transmitting the source with the largest offline scheduling priority metric minimizes Ao\(^2\)I. Through extensive simulations, we show that the lower bound is very tight and that the Ao\(^2\)I obtained by Heh is close to optimal.
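The scheduling rule has the shape of an index policy: in each slot, transmit the single source whose priority metric is largest. The sketch below shows only that selection step; the priority used here (time since the destination's copy became outdated) is a stand-in assumption, not the paper's offline metric or Heh's online estimate of it.

```python
def schedule(sources, priority):
    """Index-style scheduling: in each slot, transmit the single source whose
    priority metric is currently largest. The priority function is supplied by
    the caller; here it is a placeholder, not the paper's metric."""
    return max(sources, key=priority)

# slots each source's information at the destination has been outdated (toy data)
outdated_for = {"s1": 3, "s2": 7, "s3": 1}
print(schedule(outdated_for, priority=lambda s: outdated_for[s]))   # -> s2
```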

CausalRD: A Causal View of Rumor Detection via Eliminating Popularity and Conformity Biases

Weifeng Zhang, Ting Zhong and Ce Li (University of Electronic Science and Technology of China, China); Kunpeng Zhang (University of Maryland, USA); Fan Zhou (University of Electronic Science and Technology of China, China)

A large amount of disinformation on social media has penetrated various domains and brought significant adverse effects. Understanding its roots and propagation is desired in both academia and industry. Prior literature has developed many algorithms to identify such disinformation, particularly for rumor detection. Some leverage the power of deep learning and have achieved promising results. However, they all focus on building predictive models and improving forecast accuracy, while two important factors - popularity and conformity biases - that play critical roles in rumor spreading behavior are usually neglected.

To overcome such an issue and alleviate the bias from these two factors, we propose a rumor detection framework to learn debiased user preference and effective event representation in a causal view. We first build a graph to capture causal relationships among users, events, and their interactions. Then we apply the causal intervention to eliminate popularity and conformity biases and obtain debiased user preference representation. Finally, we leverage the power of graph neural networks to aggregate learned user representation and event features for the final event type classification. Empirical experiments conducted on two real-world datasets demonstrate the effectiveness of our proposed approach compared to several cutting-edge baselines.

Learning from Delayed Semi-Bandit Feedback under Strong Fairness Guarantees

Juaren Steiger (Queen's University, Canada); Bin Li (The Pennsylvania State University, USA); Ning Lu (Queen's University, Canada)

Multi-armed bandit frameworks, including combinatorial semi-bandits and sleeping bandits, are commonly employed to model problems in communication networks and other engineering domains. In such problems, feedback to the learning agent is often delayed (e.g., communication delays in a wireless network or conversion delays in online advertising). Moreover, arms in a bandit problem often represent entities required to be treated fairly, i.e., the arms should be played at least a required fraction of the time. In contrast to the previously studied asymptotic fairness, many real-time systems require such fairness guarantees to hold even in the short term (e.g., ensuring the credibility of information flows in an industrial Internet of Things (IoT) system). To that end, we develop the Learning with Delays under Fairness (LDF) algorithm to solve combinatorial semi-bandit problems with sleeping arms and delayed feedback, which we prove guarantees strong (short-term) fairness. While previous theoretical work on bandit problems with delayed feedback typically derives instance-dependent regret bounds, this approach proves challenging when fairness is considered simultaneously. We instead derive a novel instance-independent regret bound in this setting which agrees with state-of-the-art bounds. We verify our theoretical results with extensive simulations using both synthetic and real-world datasets.

Optimizing Sampling for Data Freshness: Unreliable Transmissions with Random Two-way Delay

Jiayu Pan and Ahmed M Bedewy (The Ohio State University, USA); Yin Sun (Auburn University, USA); Ness B. Shroff (The Ohio State University, USA)

In this paper, we study a sampling problem in which freshly sampled data is sent to a remote destination via an unreliable channel, and acknowledgments are sent back on a reverse channel. Both the forward and feedback channels are subject to random transmission times. We optimize the sampling strategy at the source (e.g., a sensor), aiming to enhance the freshness of data samples delivered to the destination (e.g., an estimator). This sampling problem is motivated by a distributed sensing system, where an estimator estimates a signal by combining noisy signal observations collected from a local sensor and accurate signal samples received from a remote sensor. We show that the optimal estimation error is an increasing function of the age of the received signal samples. The optimal sampling problem is formulated as a Markov decision process (MDP) with an uncountable state space. An exact solution to this problem is derived, which has a simple threshold-type structure. The threshold can be calculated by a low-complexity bisection search and fixed-point iterations. We find that, after a successful packet delivery, the optimal sampler may wait before taking the next sample and sending it out, whereas no waiting time should be added if the previous transmission failed.
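A threshold of this kind is typically found as the root of a monotone optimality condition, which a plain bisection search computes cheaply. The sketch below shows that generic step; the toy condition g is a placeholder assumption, not the paper's fixed-point equation.

```python
def bisect_threshold(g, lo, hi, tol=1e-9):
    """Bisection for the root of a monotone function g on [lo, hi], of the
    kind used to compute a sampling threshold; g would encode the paper's
    optimality/fixed-point condition, which is not reproduced here."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# toy usage: threshold where a monotone trade-off crosses zero
print(bisect_threshold(lambda t: t * t - 2.0, 0.0, 2.0))   # ~ 1.4142
```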

Session Chair

Zhangyu Guan (University at Buffalo)
